Marc Andreessen introspects on The Death of the Browser, Pi + OpenClaw, and Why "This Time Is Different"
Latent Space: The AI Engineer Podcast
Fresh off raising a monster $15B, Marc Andreessen has lived through multiple computing platform shifts firsthand, from Mosaic and Netscape to cofounding A16z.
In this episode, Marc joins swyx and Alessio in a16z’s legendary Sand Hill Road office to argue that AI is not just another hype cycle, but the payoff of an “80-year overnight success”: from neural nets and expert systems to transformers, reasoning models, coding, agents, and recursive self-improvement. He lays out why he thinks this moment is different, why AI is finally escaping the old boom-bust pattern, and why the real bottleneck may be less about models than about the messy institutions, incentives, and social systems that struggle to absorb technological change.
This episode was a dream come true for us, and many thanks to Erik Torenberg for the assist in setting this up. Full episode on YouTube!
We discuss:
* Marc’s long view on AI: from the 1980s AI boom and expert systems to AlexNet, transformers, and why he sees today’s moment as the culmination of decades of compounding technical progress
* Why “this time is different”: the jump from LLMs to reasoning, coding, agents, and recursive self-improvement, and why Marc thinks these breakthroughs make AI real in a way prior cycles were not
* AI winters vs. “80-year overnight success”: why the field repeatedly swings between utopianism and doom, and why Marc thinks the underlying researchers were mostly right even when the timelines were wrong
* Scaling laws, Moore’s Law, and what to build: why he believes AI scaling laws will continue, why the outside world is messier than lab purists assume, and how startups can still create durable value on top of rapidly improving models
* The dot-com crash and AI infrastructure risk: Marc’s comparison between today’s AI capex boom and the fiber/data-center overbuild of 2000, plus why he thinks this cycle is different because the buyers are huge cash-rich incumbents and demand is already here
* Why old NVIDIA chips may be getting more valuable: the pace of software progress, chronic capacity shortages, and the idea that even current models are “sandbagged” by supply constraints
* Open source, edge inference, and the chip bottleneck: why Marc thinks local models, Apple Silicon, privacy, trust, and economics all point toward a major role for edge AI
* American vs. Chinese open source AI: DeepSeek as a “gift to the world,” why open models matter not just because they’re free but because they teach the world how things work, and how open source strategies may shift as the market consolidates
* Why Pi and OpenClaw matter so much: Marc’s claim that the combination of LLM + shell + filesystem + markdown + cron loop is one of the biggest software architecture breakthroughs in decades
* Agents as the new “Unix”: how agent state living in files allows portability across models and runtimes, and why self-modifying agents that can extend themselves may redefine what software even is
* The future of coding and programming languages: why Marc thinks software becomes abundant, why bots may translate freely across languages, and why “programming language” itself may stop being a salient concept
* Browsers, protocols, and human readability: lessons from Mosaic and the web, why text protocols and “view source” mattered, and how similar principles may shape AI-native systems
* Real-world OpenClaw use: health dashboards, sleep monitoring, smart homes, rewriting firmware on robot dogs, and why the most aggressive users are discovering both the power and danger of agents first
* Proof of human vs. proof of bot: why Marc thinks the internet’s bot problem is now unsolvable via detection alone, and why biometric + cryptographic proof of human becomes necessary
Timestamps
* 00:00 Marc on AI’s “80-Year Overnight Success”
* 00:01 A Quick Message From swyx
* 01:44 Inside a16z With Marc Andreessen
* 02:13 The Truth About a16z’s AI Pivot
* 03:29 Why This AI Boom Is Not Like 2016
* 06:33 Marc on AI Winters, Hype Cycles, and What’s Different Now
* 10:09 Reasoning, Coding, Agents, and the New AI Breakthroughs
* 12:13 What Founders Should Build as Models Keep Improving
* 16:33 AI Capex, GPU Shortages, and the Dot-Com Crash Analogy
* 24:54 Open Source AI, Edge Inference, and Why It Matters
* 33:03 Why OpenClaw and Pi Could Change Software Forever
* 41:37 Agents, the End of Interfaces, and Software for Bots
* 46:47 Do Programming Languages Even Have a Future?
* 54:19 AI Agents Need Money: Payments, Crypto, and Stablecoins
* 56:59 Proof of Human, Internet Bots, and the Drone Problem
* 01:06:12 AI, Management, and the Return of Founder-Led Companies
* 01:12:23 Why the Real Economy May Resist AI Longer Than Expected
* 01:15:53 Closing Thoughts
Transcript
Marc: There’s something about AI that causes the people in the field, I would say, to become both excessively utopian and excessively apocalyptic. Having said that, I think what’s actually happened is an enormous amount of technical progress that built up over time. For example, we now know that the neural network is the correct architecture, and I will tell you, there was a 60-year run, maybe even 70, where that was controversial. So the way I think about the period we’re in right now is what I call the 80-year overnight success. It’s an overnight success because, bam, ChatGPT hits, and then o1 hits, and then OpenClaw hits, and these are radical, overnight, transformative successes. But they’re drawing on an 80-year wellspring of ideas and thinking. It’s not that it’s all brand new; it’s an unlock of all of these decades of very serious, hardcore research. If I were 18, this is what I would be spending all of my time on. This is such an incredible conceptual breakthrough.

swyx: Before we get into today’s episode, I just have a small message for listeners. Thank you. We wouldn’t be able to bring you the AI engineering, science, and entertainment content that you so clearly want if you didn’t choose to click in and tune in to our content. We’ve been approached by sponsors on an almost daily basis, but fortunately enough of you actually subscribed to us to keep all this sustainable without ads, and we want to keep it that way. But I just have one favor to ask all of you.
The single most powerful, completely free thing you can do is to click that subscribe button. It’s the only thing I’ll ever ask of you, and it means absolutely everything to me and my team that works so hard to bring Latent Space to you each and every week. If you do it, I promise we will never stop working to make the show even better. Now, let’s get into it.

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I’m joined by swyx, editor of Latent Space.

swyx: Hello, and we’re at a16z with Marc Andreessen. Welcome.

Marc: Yes, yes. The “a” and, what, half of the “16”? Something like that.

swyx: Exactly. Apparently these are the final few days in your current office. You’re moving across the road.

Marc: Yeah, we have some projects underway, but this is actually the original office. We’re in the whole thing.

swyx: It’s beautiful.

Marc: Thank you.

swyx: So I have to come out with a spicy start. In October 2022, I had just made friends with Roon, and I wanted to give him something to be spicy about. And I said, it’ll never not be funny that a16z was constantly going, “the future is where the smart people choose to spend their time,” and then going deep into crypto and not into AI. That was October 2022. And Roon says there was an internal meeting at a16z to reorient around gen AI. Obviously you have, but was there a meeting? What was that?

Marc: I mean, look, I’ve been doing AI since the late eighties.

swyx: Yeah.

Marc: So as far as I’m concerned, this stuff is all Johnny-come-lately. Look, we’ve been doing AI our entire existence. We’ve been doing AI and machine learning deeply, way from the beginning.
Obviously AI is just core to computer science; I actually view them as quite continuous. You know, Ben and I both have computer science degrees, and we’re both old enough to remember the actual AI boom in the 1980s. There was a big AI boom at the time, and it was names like expert systems, and the era of Lisp and Lisp machines. I coded in Lisp; I was coding Lisp in 1989, when that was the language of the AI future. So this is something we’re completely comfortable with, have been doing the whole time, and are very enthusiastic about.

swyx: Is there a strong “this time is different”? Because my closest analog was 2016–17. There was an AI boom, and it petered out very, very quickly, just in terms of investing.

Marc: Sort of, sort of.

swyx: Yeah, investment excitement.

Marc: Although that’s really when the Nvidia phenomenon started. I would say it was in that period, when the vocabulary was more “machine learning,” that it was very clear machine learning was hitting some sort of takeoff point.

Alessio: Yeah.

Marc: And as you guys have talked about at length on your show, if you really track what happened, I think the real story is that it was the AlexNet breakthrough in 2012. That was the real knee in the curve. And then it was obviously the transformer breakthrough in 2017.

Alessio: Yeah.

Marc: And then everything that followed. But look, machine learning... I mean, one of my projects has been working with Facebook since 2004.
And on the board since 2007. And of course, they started using machine learning very early, and have used it for basically 20 years for content and feed optimization and advertising optimization. And obviously financial services; many, many companies in many different sectors have been doing this. And so it’s one of these things where it’s not a single thing. It’s layers, right?

swyx: Yeah.

Marc: And the layers arrive at different paces, but they build up over time. And then look, in retrospect, 2017 was the key point with the transformer. And then, as you guys know, there was this really weird four-year period where the transformer existed and then it was just like...

swyx: Let’s go. Yeah.

Marc: Well, between 2017 and 2021, that was the era in which companies like Google had internal chatbots, but they weren’t letting anybody use them.

swyx: Yeah.

Marc: Right. And then OpenAI developed GPT-2, and then they told everybody this was way too dangerous to deploy. Right? We can’t possibly let normal people use this thing. And then, you guys surely remember AI Dungeon.

swyx: Mm-hmm.

Marc: There was like a year where the only way for a normal person to use GPT-3 was in AI Dungeon.

Alessio: Yeah.

Marc: And so you would do this: you’d go in there and you’d pretend to play Dungeons and Dragons, when in reality you’re just trying to talk to GPT.
And so there was this long period... and look, big companies are cautious, and the big companies were cautious. And by the way, it took OpenAI time, they talk about this, it took OpenAI time to actually redirect their research path.

swyx: I think it was the Rosewood, right? The dinner that founded OpenAI was right there.

Marc: Right, right. But that dinner would’ve taken place in...

swyx: 2018?

Marc: 2019? The formation of OpenAI was as late as 2018?

swyx: Sorry, no, I’m wrong. They just celebrated a 10-year anniversary, and it is 2025. So, 2015?

Marc: Yeah, 2015. But then Alec Radford did GPT-1 in, what, probably...

swyx: Mm-hmm, ’17, ’18.

Marc: Yeah, ’17, ’18. And then GPT-3 was what, 2020?

swyx: 2020.

Marc: Because that became Copilot immediately. Even OpenAI, which has been the leader of this thing in the last decade, even they had to adapt and lean into the new thing. And so I think it’s just this process of wave after wave, layer after layer, building on itself. And then you get these catalytic moments where the whole thing pops, and obviously that’s what’s happening now.

swyx: Is it useful to think about whether there will be an AI winter? Because there are always these patterns. “Is this summer?” is something I constantly think about. Do I just get endlessly hyped and trust that I will only ever be early, and never wrong? Will there be a winter?

Marc: So I’d say the following. There’s something about AI that has led to this repeated pattern.
And you guys know this.

swyx: It’s summer, winter, summer, winter.

Marc: Summer, winter, summer, winter. And it goes back 80 years. The original neural network paper was 1943, right? Which is amazing, that it goes back that far. And I don’t know if you guys have ever talked about this on your show, but there was a big AI conference at Dartmouth in 1956. They got a grant for all the AI experts at the time to spend the summer together, and they figured that if they had 10 weeks together, they could get AGI at the other end. And by the way, they got the grant, they got the 10 weeks, and then, you know: no AGI. And like I said, I lived through the eighties version of this, where there was a big boom and a crash. So there is this thing. There is something about AI that causes the people in the field, I would say, to become both excessively utopian and excessively apocalyptic, and you see that play out on both sides of the boom-bust cycle. Having said that, I think what’s actually happened, and we now know this in retrospect, is an enormous amount of technical progress that built up over time. For example, we now know that the neural network is the correct architecture. And I will tell you, there was a 60-year run, or even 70, where that was controversial. We now know that’s the case, and everything we’re building on today derives from the original idea in 1943.
And so in retrospect, we now know that these guys were right. They got the timing wrong; they thought capabilities would arrive faster, or could be turned into businesses sooner, or whatever. But the scientists who worked on this over the course of decades were fundamentally correct about what they were doing, and the payoff from all their work is happening now. So the way I think about the period we’re in right now is what I call the 80-year overnight success, right? It’s an overnight success because, bam, ChatGPT hits, and then o1 hits, and then OpenClaw hits, and these are radical, overnight, transformative successes. But they’re drawing on an 80-year wellspring of ideas and thinking. It’s not that it’s all brand new; it’s an unlock of all of these decades of very serious, hardcore research and thinking. And look, there were AI researchers who spent their entire lives on this. They got their PhDs, they did research for 40 years, they retired, and in a lot of cases they passed away and never actually saw it work.

swyx: Yeah. It’s all sad.

Marc: It is. It is sad.

swyx: Geoff Hinton was like the last guy.

Marc: Yeah. Well, there was a guy, Allen Newell, and there are tons: John McCarthy was one of the inventors of the field. He’s one of the guys who organized the Dartmouth conference, and he taught at Stanford for 40 years.

swyx: Wow.

Marc: And passed away, I don’t know, whatever, 10 years ago or something. Never actually got to see it happen.
But it is amazing in retrospect: these guys were incredibly smart, they worked really hard, and they were correct. So anyway, then it’s like, okay, they say history doesn’t repeat, but it rhymes. Does that mean there’s going to be another boom-bust cycle? And I will tell you, in a sense, yes: everything goes through cycles, people get overly enthusiastic and overly depressed, and there’s a timelessness to that. Having said that, there’s just no question... You know the four most dangerous words in investing? “This time is different.” And the twelve most dangerous words in investing are “the four most dangerous words in investing are ‘this time is different.’” So I’ll tell you what’s different: now it’s working. There’s just no question. And by the way, I’ll give you guys my take. With LLMs, from basically the ChatGPT moment through to spring of ’25, I think well-intentioned, well-informed skeptics could still say: oh, this is just pattern completion, these things don’t really understand what they’re doing, the hallucination rates are way too high. This is going to be great for creative writing, for Shakespearean sonnets and rap lyrics or whatever, but we’re not going to be able to harness it to make it relevant in coding or medicine or law, in the fields that really, really matter. And I think basically it was the reasoning breakthrough.
It was o1 and then R1 that basically answered that question. They basically said: oh no, we’re going to be able to actually turn this into something that’s going to work in the real world. And then obviously the coding breakthrough, the one that catalyzed over the holiday break, was the third step in that. Where you’re just like, alright, if Linus Torvalds is saying that the AI coding is now better than he is...

swyx: That’s the benchmark.

Marc: Yeah. That’s never happened before. And so now we know it’s going to sweep through coding, and we know that if it’s going to work in coding, it’s going to work in everything else. Right? Because that’s, in many ways, the hardest example, and everything else is going to be a derivative of that. And then on top of that, we just got the agent breakthrough, with OpenClaw, which is fantastic, which is amazing and incredibly powerful. And then we just got the auto-research, the self-improvement... we’re now into the self-improvement breakthrough. So the way I think about it is we’ve had four fundamental breakthroughs in functionality: LLMs, reasoning, agents, and now RSI. And they’re all actually working. And as you can tell, I’m jumping out of my shoes. This is it. This is the culmination of 80 years’ worth of work, and this is the time it’s becoming real.

Alessio: Yeah.

Marc: I’m completely convinced.

Alessio: I think the anxiety that people feel is that during the transistor era, you had Moore’s law, and it was like, all right, we understand why these things are getting better. We understand the physics of it. Yeah.
With AI, it’s so jagged in the jumps, where, like you said, in three months you have this huge jump, and people are like, well, can this keep happening?

Marc: It’ll keep happening.

Alessio: And so how do you think about timelines for what we’re building? We always have this question with guests: should you spend time building a harness for a model, versus the next model is just going to one-shot it? And how does that inform how you think about the shape of the technology? You talk about how it’s a new computing platform, but if you have a computing platform that drastically changes what it looks like every six months, it’s hard to build companies on top of it.

Marc: Yeah. So, a couple things. One is, look, Moore’s law was what we now call a scaling law. And for your younger viewers, Moore’s law was: chips either get twice as powerful or twice as cheap every 18 months. It’s gotten more complicated in the last few years, but that was the 50-year trajectory of the computer industry. And by the way, that’s what took the mainframe, a $25 million thing in current dollars, to the phone in your pocket being a million times more powerful than that, for 500 bucks. So that was a scaling law. And key to any scaling law, including Moore’s law and the AI scaling laws, is that they’re not really laws, right?
They’re predictions. But when they work, they become self-fulfilling predictions, because they set a benchmark, and then the entire industry, all the smart people in the industry, work to make sure that actually happens. They motivate the breakthroughs that are required to keep the curve going. And in chips, that was a 50-year run, right? It was amazing, and it’s still happening in some areas of chips. I think the same thing is happening with the core scaling laws in AI. They’re not really laws, but they are predictions, and then they’re motivating catalysts for the research work that’s required, and by the way, also the investment dollars required, to basically keep the curves going. And look, it’s going to be complicated and variable. There are going to be walls that look like they’re fast approaching, and then engineers are going to get to work and figure out a way to punch through them, and obviously that’s been happening a lot. And there are going to be times when it looks like the laws have petered out, and then they’re going to pick up again and surge. And it appears that what’s happening is there are multiple scaling laws, multiple areas of improvement. And I don’t know how many more there are yet to be discovered, but there are probably some more that we don’t know about yet.
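Marc’s framing of Moore’s law as a scaling law is easy to sanity-check with compounding arithmetic. A minimal sketch, assuming the classic 18-month doubling period (the printed figures are illustrative, not claims about any particular chip):

```python
# Compounding of a Moore's-law-style doubling: twice as powerful
# (or half the cost) every 18 months, per the classic formulation.

def doublings(years: float, months_per_doubling: float = 18) -> float:
    """Number of doublings that fit in the given span of years."""
    return years * 12 / months_per_doubling

def growth_factor(years: float, months_per_doubling: float = 18) -> float:
    """Total improvement factor after compounding."""
    return 2 ** doublings(years, months_per_doubling)

# 30 years of 18-month doublings is 20 doublings -- about a million-fold,
# which is the scale of the mainframe-to-phone comparison above.
print(f"30 years: {growth_factor(30):,.0f}x")  # 1,048,576x
# A 50-year run compounds to roughly ten-billion-fold.
print(f"50 years: {growth_factor(50):.1e}x")   # 1.1e+10x
```

The same compounding logic is why an AI scaling law, if it keeps holding, dominates any fixed head start.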
For example, there’s probably some scaling law around world models and robotics that we don’t fully understand yet, around the acquisition of data at scale in the real world. That one will probably kick in at some point here; there’s a bunch of really smart people working on that. So yeah, I think the expectation is that the scaling laws generally are going to continue, and the pace of improvement will continue to move really fast.

To your question on what to build: I’m a complete believer that the scaling laws are going to continue. I’m a complete believer that the capabilities are going to keep making amazing leaps and bounds. The part where I part ways a little bit is with what I would describe as the AI purists, which I would characterize as the people who are, in many ways, the smartest people in the field, but also the people who have spent their entire lives at a lab, and who have, I would say, very little experience of the outside world. The nuance I would offer is that the outside world, of 8 billion people and institutions and governments and companies and economic systems and social systems, is really complicated. 8 billion people making collective decisions on planet Earth is not a simple process. You see this happening now: a bunch of AI CEOs have this thing when they talk in public, where it’s just like, well, there’s this obvious set of things society ought to do.

Alessio: Mm-hmm.

Marc: And then they’re like, society’s not doing any of those things. Right? And it’s like, whatever their theory is, how can society not see X, Y, Z?

Alessio: Mm-hmm.
Marc: And the answer is, well, number one, there’s no single society. It’s 8 billion people, and they all have a voice, and they all have a vote, at the end of the day, in how they react to change. Human reality is just really complicated and messy. And so the specific answer to your question is, as usual, it depends. Look, there’s no question there are going to be companies, and it’s already happening, that think they’re building value on top of the models and then just get blitzed by the next model. There’s no question that’s happening. But I think there’s also no question that the process of adapting any technology into the real, messy world of humanity is going to be messy and complicated. It’s not going to be simple and straightforward. And there are going to be a lot of companies and a lot of products, and in fact entire industries, that get built to actually help all of this technology reach real people.

Alessio: On the amount of capital going into these companies: Dario talked about it on the Dwarkesh podcast, and Dwarkesh was like, why don’t you just buy 10x more GPUs? And he said, because I’m going to go bankrupt if the model doesn’t exactly hit the performance level. How do you think about that as a risk? You guys are investors in OpenAI and Thinking Machines and World Labs. It seems like we’re leveraging the scaling laws at a pretty high rate, right? How comfortable do you feel with the downside scenario? Say things peter out: do you think you can restructure these build-outs and capital investments?

Marc: Yeah.
So I should start by saying: I lived through the dot-com crash, and I can tell you stories for hours about it. It was horrible. It was awful. It was apocalyptic. And by the way, a lot of the dot-com crash was actually, at the time, a telecom crash. It was a bandwidth crash. The thing that actually crashed, that wiped out all the money, was the telecom companies.

swyx: Global Crossing.

Marc: Global Crossing, yeah.

swyx: I’m from Singapore, and they laid so much cable over our oceans.

Marc: There was actually a scaling law in the dot-com era. Literally, the US Commerce Department put out a report in 1996 saying internet traffic was doubling every quarter. And in 1995 and 1996, internet traffic actually did double every quarter. And so that became the scaling law, and what all these telecom entrepreneurs did was go out and raise money to build fiber, anticipating that the demand for bandwidth would keep doubling every quarter. But doubling every quarter is like grains of rice on the chessboard: at some point the numbers become extremely large. And really what happened was that the internet continuously kept growing, basically since inception. It’s never shrunk, and it’s grown really fast compared to anything else in human history. But it wasn’t doubling every quarter as of 1998, 1999. And so there was this gap between the expectation, what they thought was a scaling law, and reality.
And that’s actually what caused the dot-com crash: companies like Global Crossing way overbuilt fiber, and by the way, telecom equipment, all the networking gear, and then the actual physical data centers. That was the beginning of the data center build, and then the data center overbuild. I think it was something like $2 trillion that got wiped out. And the other subtlety was that the internet companies themselves never really had any debt, because tech companies generally don’t run on debt, but the telecom companies ran on debt. Physical infrastructure companies run on debt. So companies like Global Crossing didn’t just raise a lot of equity, they also raised a lot of debt, so they were highly levered. And then you just do the math: you have a highly levered thing that’s overbuilding capacity, demand is growing but not as fast as you hoped, and then, boom, bankrupt. And then it’s like they say about the hotel industry: it’s always the third owner of a hotel that makes money. It has to go bankrupt twice, right? You have to wash out all of the over-optimistic exuberance before it gets to a stable state where it actually makes money. And by the way, all of those data centers and all of that fiber are in use; it’s all in use today. But 25 years later. And the elapsed time was 15 years: it took from 2000 to 2015 to actually fill up all that capacity. The cautionary warning is that the overbuild can happen.
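The gap Marc describes is pure compounding arithmetic. A minimal sketch, where the “actual” growth rate is a hypothetical stand-in (doubling yearly) rather than historical data, shows how quickly a doubling-every-quarter projection diverges:

```python
# Projected vs. actual demand growth over a late-90s build-out window.
# "Doubling every quarter" was the projection; the "actual" rate here is
# an illustrative assumption, not a measured figure.

def growth(doublings_per_year: int, years: int) -> int:
    """Total growth multiple after compounding."""
    return 2 ** (doublings_per_year * years)

years = 4  # e.g. a 1996 -> 2000 horizon
projected = growth(4, years)  # doubling every quarter
actual = growth(1, years)     # hypothetical: doubling every year

print(f"projected demand: {projected:,}x")              # 65,536x
print(f"actual demand:    {actual:,}x")                 # 16x
print(f"capacity overshoot: {projected // actual:,}x")  # 4,096x
```

Even with demand genuinely growing fast, capacity built against the projected curve ends up orders of magnitude ahead of reality, which is the chessboard point above.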
And you get into this thing where everybody who has any sort of institutional capital says: I don't know how to invest in these crazy software things, but for sure I can build data centers, and for sure I can buy GPUs and deploy compute grids and all these things. So if you're a pessimist, you could look at this and say, wow, this is really set up to replicate what we went through in 2000. Obviously that would be bad. The counterargument, which is the one I agree with, is a couple of things. One is that the companies investing the money are the bluest-chip of companies. Back in the dot-com era, Global Crossing was a new venture, but the money being deployed at scale now is Microsoft, Amazon, Google, Facebook, Nvidia, and now, by the way, OpenAI and Anthropic, which are at really serious size, with very serious revenue. These are very large-scale companies with lots of cash and lots of debt capacity they've never used. So this is institutional in a way it really wasn't at the time. And the other is that, at least for now, every dollar that's being put into anything that results in a running GPU is being turned into revenue right away. You guys know this: everybody's starved for compute capacity, and for all the associated things, memory, interconnect, everything else, data center space.
So every dollar right now that's being put into the ground is turning into revenue. And in fact, I think there's an interesting thing happening: because everybody is starved for capacity, the models we actually have today are inferior versions of what we would have if not for the supply constraints.
swyx: That's true.
Marc: Pose a hypothetical universe in which GPUs were ten times cheaper and ten times more plentiful. The models would be much better, because you would just allocate a lot more money to training and build better models. So we're actually getting the sandbagged version of the technology.
swyx: Yeah. Everything we use is quantized, because the labs have to keep the full versions.
Marc: Right. We're not even getting the good stuff. But even if technical progress stops, once there's a much bigger buildout of GPU manufacturing capacity and memory, all the things that have to happen over the next five or ten years, even the current technology is going to get much better. And then, as you know, there are a million ways to use this stuff, a million use cases. This isn't just sending packets across a wire and hoping people find something to do with it. This is: we apply intelligence to every domain of human activity, and it works incredibly well. Here's what I know: somewhere between three and four years out, basically everything is selling out.
The entire supply chain is sold out, or selling out. So we're just going to have a chronic supply shortage for years to come. There's going to be a response from the market, and it's happening now: an enormous flood of investment in new fab capacity and everything else needed. At some point the supply chain constraints will unlock, at least to some degree, and when that happens it will be another accelerant to industry growth, because the products will get better and everything will get cheaper. So I know that's going to happen. I know the deployments, the actual use cases, are really compelling. And like I said, with reasoning and agents and so forth, I know they're going to get much, much better from here. So I know the capabilities are really real and serious. I also know that the technical progress is not going to stop; it is accelerating. The breakthroughs are tremendous; even month over month, the breakthroughs are really dramatic. So if you were a cynic, and there are cynics, you can look at 2000 and find echoes. But I can't even imagine betting that this is somehow going to disappoint, and at least for years to come, I think it would be essentially suicidal to make that bet. Was it Michael Burry...
swyx: An interesting guy, huh?
Marc: Let's pick on one guy, because he came out with it...
swyx: He doesn't mind.
Marc: It was the Nvidia short. He came out with the Nvidia short.
And you guys have probably talked about this: the analysis now is that current models are getting better so fast that if you're running a three-year-old Nvidia inference chip today, you're making more money on it today than you did three years ago, because the pace of improvement in the software is faster than the depreciation cycle of the chip. And my understanding, these are rumors I've heard, or maybe it's public, is that Google is running very old TPUs very profitably for inference.
swyx: Yeah, very profitably.
Marc: So as far as I can tell, it's actually the opposite of the Burry thesis; he was 180 degrees wrong. The old Nvidia chips are getting more valuable, which is something that has literally never happened before. It has never been the case that an older-model chip becomes more valuable rather than less valuable. And again, that's an expression of the ferocious pace of software progress, the ferocious pace of capability payoff you're getting on the other side of this. So the idea of betting against that...
swyx: Yeah.
Marc: It seems like an invitation to get your face ripped off.
swyx: One of my early hits was modeling the lifespan of the H100s and H200s. Usually they advise four to seven years, and realistically you might haircut that down to two to three. But actually it's going up, not down. And I think that's the dream. We are finding utilization, and I think utilization solves all problems.
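The mechanism behind the old-chip argument can be shown with a toy model; the $100,000 base revenue, the 40%-per-year software-driven throughput gain, and the four-year straight-line schedule are all invented numbers, purely to illustrate how earning power can outrun book depreciation:

```python
# Toy model (all figures assumed): annual revenue from one inference chip
# when software improvements keep raising tokens served per chip.
def annual_revenue(base_revenue: float, sw_gain_per_year: float, year: int) -> float:
    """Revenue earned in `year` (0-indexed) if software gains compound yearly."""
    return base_revenue * (1 + sw_gain_per_year) ** year

base = 100_000   # assumed year-0 revenue per chip, in dollars
sw_gain = 0.40   # assumed 40%/yr throughput gain from software alone

for year in range(4):
    book_value = base * max(0.0, 1 - year / 4)  # 4-year straight-line schedule
    print(f"year {year}: revenue ${annual_revenue(base, sw_gain, year):,.0f}, "
          f"book value ${book_value:,.0f}")
```

Under these assumptions, the year-3 chip earns well over twice its year-0 revenue even as its book value approaches zero, which is the shape of the claim Marc attributes to the anti-Burry analysis.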
You can find use cases for everything. Even memory, where we're having a shortage, even the shittier versions of memory that we do have, we're finding use cases for. So that's great.
Marc: Yeah.
Alessio: How important are open source AI and edge inference in a world with three years of supply crunch? If you fast-forward five years, how do you think about inference in the data center versus at the edge?
Marc: So just to start: I think open source is very important for a bunch of reasons, and I think edge inference is very important for a bunch of reasons. Practically speaking, if we're going to have fundamental supply crunches for the next few years: you guys know, if you project forward demand over the next three years relative to supply, one of the main predictions you can make is about what's going to happen to the cost of inference in the core over that period. It may rise dramatically. And the big model competitors are subsidizing heavily right now, right? So what will be the average person's per-day, per-month token cost three years from now, to do all the things they want to do? I don't know. You guys probably have friends like this; I have friends today who are paying a thousand dollars a day in Claude tokens to run OpenClaw. That's $30,000 a month. And by the way, those friends have a thousand more ideas for things they want their OpenClaw to do.
So you could imagine latent demand of up to, I don't know, five or ten thousand dollars a day in tokens for a fully deployed personal agent. Obviously consumers can't pay that, but it gives you a sense of the future scope of demand. Even if there's a 10x improvement in price-performance, that still comes to a hundred dollars a day, which is still way beyond what people can pay. So demand is just going to be ferocious. By the way, the other interesting thing about agents: up until now a lot of the constraints have been GPU constraints, and I think the agent thing now also translates into CPU constraints.
swyx: CPU memory.
Marc: Yes, CPU memory. So the entire chip ecosystem is just going to get...
swyx: Wait for the network constraints; that will be the killer.
Marc: It's all potentially bottlenecked for years. And while inference costs are generally going to keep coming down, I think the rate of decline may level out here for a bit because of these supply constraints. Then at some point maybe the labs stop subsidizing so much, and that again will be an issue. So there's just going to be far more demand for inference than can be satisfied with the centralized model. And then, you guys know this, the dramatic innovations in Apple silicon for doing inference are quite amazing, and the open source guys are putting incredible effort in. There's this recurring pattern where the big model will never run on a PC, and then six months later, oh, it runs on a PC.
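The token-spend arithmetic in this exchange, using only the figures from the conversation (the $1,000/day number is Marc's anecdote, not published pricing):

```python
# Agent token-spend arithmetic from the episode (anecdotal figures,
# not published pricing).
daily_spend = 1_000            # $/day for a heavy agent user today
monthly = daily_spend * 30
print(f"Monthly: ${monthly:,}")

improvement = 10               # hypothetical 10x price-performance gain
after = daily_spend / improvement
print(f"After a {improvement}x gain: ${after:,.0f}/day")

latent_low, latent_high = 5_000, 10_000  # latent demand per day, per Marc
print(f"Latent demand after the gain: ${latent_low / improvement:,.0f} to "
      f"${latent_high / improvement:,.0f}/day")
```

The point of the arithmetic: even a full order-of-magnitude improvement in price-performance leaves the anecdotal workload at $100/day, so cheaper inference expands demand rather than exhausting it.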
It's amazing, and there are very smart people working on that. So there's all that. Then there are other motivators. One is trust: how much trust are the big centralized model providers building in the market? At least in some cases, for certain use cases, people are saying, "I'm not willing to just turn everything over." So there are trust issues. There's also straight-up price optimization: there are many uses of AI where you don't need Einstein in the cloud, you just need a smart local model. And there are performance issues: you're going to want your doorknob to have an AI model in it, to do access control. Obviously everything with a chip is going to have an AI model in it, and a lot of those are going to be local. And then wearable devices: you don't want to do a complete round trip. Whatever your smart devices are, you want them to be super low latency.
swyx: The question is, do we care who makes it? One of the biggest news items this week was the collapse of AI2, the Allen Institute, one of the actual American open source model labs. And I'm not that optimistic about American open source. You guys invested in Mistral, and Mistral is doing extremely well. Outside of China, that's about it.
Marc: Yeah, we'll see. Look, number one, I do think we care who makes it.
I would say this: the previous presidential administration wanted to kill it in the US.
swyx: Oh yeah.
Marc: They wanted to drown it in the bathtub. So at least we have a government now that actually wants it to happen.
swyx: And you're on the council...
Marc: Yeah, and the new PCAST. So this administration, whatever other political issues people have with it, and there are many, has, I think, a very enlightened view on AI, and in particular on open source AI. So they're very supportive. My read is that the various Chinese companies have a very specific reason to do open source, which is that they don't think they can sell commercial AI outside of China right now, or at least specifically not in the US, for a combination of reasons. So I think they view open source AI as a bit of a loss leader for their domestic paid services, and for ancillary products. They're very excited about it, and by the way, I think it's great that they're doing it. I think DeepSeek was a gift to the world. The impact of open source is felt two ways: you get the software for free, but you also get to learn how it works; the paper and the code. For example, I thought this was amazing: OpenAI comes out with o1, and it's an amazing technical breakthrough, absolutely fantastic. But of course they don't explain how it works in detail.
And of course they hide the reasoning traces. So everybody's like, okay, this is great, but who's going to be able to replicate it? Is there secret sauce in there? And then R1 comes out, and there's the code and there's the paper, and now the whole world knows how to do it. Three months later, every other AI model is adding reasoning. So you get this double effect: even if the Chinese models themselves are not the models that get used, the education of the rest of the world, the information diffusion, is incredibly powerful. So that happens, and then we'll see; there are a bunch of American open source AI model companies. Look, there already is tremendous competition among the primary model companies. Depending on how you count, there are four or five big model companies now that are kind of neck and neck in different ways. And then both xAI and Meta, where we're involved, have huge attempts underway to leapfrog. And then you've got a whole fleet of startups, new companies, including a whole bunch that we're backing, trying different approaches. And then how many mainline foundation model companies are there in China at this point? Probably six?
swyx: The "Five Tigers" is what they call it. Yeah.
Qwen is questionable because there's a change in leadership.
Marc: Right. But does that include Moonshot?
swyx: Yes. And DeepSeek, Z.ai... 01 is in there.
Marc: Right. And then ByteDance...
swyx: ByteDance would be more like the next tier. They weren't as prominent; they didn't have a leading...
Marc: Yeah. But at least ByteDance is very inspiring, and presumably they have more stuff coming, and Tencent probably has more stuff coming, and so forth. So here's a thing you can anticipate. Between the US and China right now there are something like a dozen primary foundation model companies at scale, at some level of critical mass. It's not going to be a dozen in three years, because these industries don't bear a dozen. It's going to be three or four big winners, or maybe one or two. So there's going to be a whole bunch of those guys who have to figure out alternate strategies, and I think open source is one of those strategies. So on the question of who's going to do open source: I think that could change really fast. It's a very dynamic thing, very hard to predict, and very important.
swyx: NVIDIA's doing a lot.
Marc: Well, exactly. There's an old idea in business strategy called "commoditize the complement." If you're Jensen, it's just obvious: of course you want to commoditize the software.
swyx: That's right.
Marc: And to his enormous credit, he's putting enormous resources behind that.
So maybe it's literally Nvidia, and I think that would be great.
Alessio: Narrative violation: two European projects...
swyx: Damn. I'm hosting my Europe conference soon, and I got both of them.
Alessio: They got us. They got us, Marc. Finished.
Marc: They got us. Well, wait a minute. Where was Steinberger when he did it? In Austria?
Alessio: Yeah.
Marc: He was in Vienna. And where is he now?
swyx: He's moving to SF.
Marc: Okay, there we go. And then the Pi guys, right? The Pi guys are European.
swyx: Yeah, they're buddies of his.
Alessio: In Austria. Mario's also there.
Marc: Right. And they haven't announced any sort of change yet, have they?
Alessio: No, they have a company there.
Marc: Okay. Good.
swyx: Anyway, I think Pi and OpenClaw are very important pieces of software, and I just wanted you to go off on what you think.
Marc: Yeah. The combination of the two of them, I think, is one of the ten most important pieces of software...
swyx: OpenClaw got all the attention, but talk about Pi.
Marc: Pi is kind of the architectural breakthrough. For those of us who are older: there was this whole thing that was very important in the world of software from roughly 1973 until basically the creation of Linux, and it still is very important, called the Unix mindset. There were all these different operating systems: mainframes, and then Windows and Mac and all these things. But behind it all was this idea of the Unix mindset.
And the Unix mindset was this. In the old days, the operating system that made the computer industry really work in the 1960s was OS/360, a big operating system that IBM developed that was supposed to run everything. It was a giant monolithic architecture, a giant castle in the sky of software. And by the way, it worked really well, and they were very successful with it. But it was almost unapproachable: you had to be inside IBM, or very close to IBM, and you had to understand every aspect of how the system worked. And then Unix, originally out of AT&T and then out of Berkeley, came along and said, no, let's have a completely different architecture. We're going to have a prompt and a shell, and all the functionality is going to be in the form of discrete modules, and you're going to be able to chain the modules together. It's almost as if the operating system itself is a programming language. That led to the centrality of the shell, then to chaining together Unix tools, and then to the emergence of scripting languages like Perl, where you could do this very easily, and the shells got more sophisticated. And look, number one, that worked, and that was the world I grew up in. I was a Unix guy.
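The chaining Marc describes, small single-purpose tools composed through one common interface, can be loosely sketched in Python; this is an analogy to a pipeline like `grep error log | sort | uniq`, not how a shell is actually implemented:

```python
# Loose Python analogy of the Unix pipe mindset: each "tool" does one thing
# and consumes/produces the same interface (an iterable of lines).
from typing import Callable, Iterable

def grep(pattern: str) -> Callable[[Iterable[str]], Iterable[str]]:
    """Keep only lines containing `pattern` (like the grep tool)."""
    return lambda lines: (line for line in lines if pattern in line)

def sort_lines(lines: Iterable[str]) -> Iterable[str]:
    """Sort the stream (like sort)."""
    return iter(sorted(lines))

def uniq(lines: Iterable[str]) -> Iterable[str]:
    """Drop adjacent duplicates (like uniq)."""
    previous = None
    for line in lines:
        if line != previous:
            yield line
        previous = line

def pipe(data: Iterable[str], *stages: Callable) -> list:
    """Chain stages left to right, the way `a | b | c` does in a shell."""
    for stage in stages:
        data = stage(data)
    return list(data)

log = ["error: disk", "ok", "error: net", "error: disk"]
print(pipe(log, grep("error"), sort_lines, uniq))
# analogous to: grep error log | sort | uniq
```

The design point is the shared interface: because every stage speaks "stream of lines," any tool composes with any other, which is the property the agent-shell analogy later in the conversation leans on.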
From, call it, 1988 all the way through my work, and it worked really well. It's in the background; normal people didn't need to know about it, but if you were doing system architecture or application development, you knew all about it. And it's been in the background ever since. Your Mac still has a Unix shell in there, and your iPhone still has a Unix shell buried in there somewhere. The Windows shell is sort of a weird derivative of it. But the internet runs on Unix, and smartphones, both iOS and Android, are Unix derivatives. So Unix did end up winning, and then we just started taking that for granted. So the way I think about what happened with Pi and then with OpenClaw is basically what those guys figured out. I always say the great breakthroughs are obvious in retrospect, which is the best kind; they weren't obvious at the time, or somebody else would have done them already. So there is a real conceptual leap, but then you look at it backwards and you're just like, oh, of course. To me those are always the best breakthroughs. Actually, language models themselves are like that: oh, next-token completion, of course.
swyx: Yeah. What other objective mattered?
Marc: Exactly. But even that wasn't obvious until somebody actually did it.
So the conceptual breakthrough is real and deep and powerful and very important. And the way I think about Pi and OpenClaw is that they basically marry the language model mindset to the Unix shell-prompt mindset. So it's this idea of: what is an agent? As you know, many smart people have been trying to figure out what an agent is for decades, and they've had many architectures for building agents.
... [Content truncated due to size limits]